Graph Neural Networks (GNNs) have proven to excel at predictive modeling tasks where the underlying data are graphs. However, as GNNs are widely used in human-centered applications, the issue of fairness arises. While edge deletion is a common method used to promote fairness in GNNs, it fails to account for cases where the data inherently lacks fair connections. In this work, we consider the unexplored method of edge addition, accompanied by deletion, to promote fairness. We propose two model-agnostic algorithms to perform edge editing: a brute-force approach and a continuous approximation approach, FairEdit. FairEdit performs efficient edge editing by leveraging gradient information of a fairness loss to find edges that improve fairness. We find that FairEdit outperforms standard training on many datasets and GNN methods while performing comparably to many state-of-the-art methods, demonstrating FairEdit's ability to improve fairness across many domains and models.
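As a rough illustration of the gradient-guided editing idea, the sketch below relaxes the adjacency matrix to a continuous tensor and scores candidate edge additions and deletions by the gradient of a generic fairness loss. The model interface and the fairness loss are assumptions for illustration, not the authors' released implementation.

```python
import torch

def propose_edge_edit(adj, features, model, fairness_loss_fn):
    """Score candidate edge additions/deletions by the gradient of a
    fairness loss w.r.t. a relaxed adjacency matrix (illustrative sketch,
    not the FairEdit implementation)."""
    adj = adj.clone().detach().requires_grad_(True)   # relax A to be continuous
    out = model(features, adj)                        # hypothetical GNN forward on (X, A)
    loss = fairness_loss_fn(out)                      # e.g. a demographic-parity surrogate
    loss.backward()
    grad = adj.grad                                   # d(fairness loss) / dA

    # A negative gradient on a missing edge suggests adding it would reduce
    # the fairness loss; a positive gradient on an existing edge suggests
    # deleting it would help.
    add_scores = torch.where(adj.detach() == 0, grad, torch.zeros_like(grad))
    del_scores = torch.where(adj.detach() == 1, grad, torch.zeros_like(grad))
    best_add = divmod(torch.argmin(add_scores).item(), adj.size(1))  # (i, j) to add
    best_del = divmod(torch.argmax(del_scores).item(), adj.size(1))  # (i, j) to delete
    return best_add, best_del
```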
We propose the first joint audio-video generation framework that brings engaging watching and listening experiences simultaneously, towards high-quality realistic videos. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion), with two-coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion consists of a sequential multi-modal U-Net for a joint denoising process by design. Two subnets for audio and video learn to gradually generate aligned audio-video pairs from Gaussian noises. To ensure semantic consistency across modalities, we propose a novel random-shift based attention block bridging over the two subnets, which enables efficient cross-modal alignment, and thus reinforces the audio-video fidelity for each other. Extensive experiments show superior results in unconditional audio-video generation, and zero-shot conditional tasks (e.g., video-to-audio). In particular, we achieve the best FVD and FAD on Landscape and AIST++ dancing datasets. Turing tests of 10k votes further demonstrate dominant preferences for our model. The code and pre-trained models can be downloaded at https://github.com/researchmm/MM-Diffusion.
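A loose sketch of what a random-shift cross-modal attention could look like is given below: each video token attends only to a small, randomly shifted window of audio tokens instead of the full sequence, which is what keeps the alignment cheap. The window size, shapes, and residual fusion are assumptions for illustration and differ from the actual MM-Diffusion block.

```python
import torch

def random_shift_cross_attention(video_tokens, audio_tokens, window=4):
    """Each video token attends to a randomly shifted window of audio tokens
    (an illustrative sketch of random-shift cross-modal attention)."""
    B, Tv, C = video_tokens.shape
    _, Ta, _ = audio_tokens.shape
    shift = torch.randint(0, Ta - window + 1, (1,)).item()    # random window start
    audio_win = audio_tokens[:, shift:shift + window]         # (B, window, C)
    attn = torch.softmax(video_tokens @ audio_win.transpose(1, 2) / C ** 0.5, dim=-1)
    return video_tokens + attn @ audio_win                    # residual fusion
```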
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher model to promote a smaller student model. Existing efforts guide the distillation by matching their prediction logits, feature embeddings, etc., while leaving how to efficiently utilize them in conjunction less explored. In this paper, we propose Hint-dynamic Knowledge Distillation, dubbed HKD, which excavates the knowledge from the teacher's hints in a dynamic scheme. The guidance effect of the knowledge hints usually varies across instances and learning stages, which motivates us to customize a specific hint-learning manner for each instance adaptively. Specifically, a meta-weight network is introduced to generate instance-wise weight coefficients for the knowledge hints in response to the dynamical learning progress of the student model. We further present a weight ensembling strategy to eliminate the potential bias of coefficient estimation by exploiting historical statistics. Experiments on the standard benchmarks CIFAR-100 and Tiny-ImageNet show that the proposed HKD effectively boosts knowledge distillation.
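A minimal sketch of the instance-wise weighting idea follows: a tiny meta-network looks at a per-sample signal from the student (here, its task loss, as a stand-in for "learning progress") and outputs a weight for that sample's hint loss. The architecture, input signal, and temperature are illustrative assumptions, and the weight ensembling over historical statistics is omitted.

```python
import torch
import torch.nn as nn

class MetaWeightNet(nn.Module):
    """Maps a per-sample scalar (e.g. the student's task loss) to a weight
    in (0, 1) for that sample's hint loss (illustrative design)."""
    def __init__(self, in_dim=1, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, per_sample_task_loss):
        return self.net(per_sample_task_loss.unsqueeze(-1)).squeeze(-1)  # (B,)

def hkd_step(student_logits, teacher_logits, targets, meta_net, T=4.0):
    ce = nn.functional.cross_entropy(student_logits, targets, reduction='none')
    kd = nn.functional.kl_div(
        nn.functional.log_softmax(student_logits / T, dim=1),
        nn.functional.softmax(teacher_logits / T, dim=1),
        reduction='none').sum(dim=1) * T * T             # per-sample hint loss
    w = meta_net(ce.detach())                             # instance-wise hint weights
    return ce.mean() + (w * kd).mean()
```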
We investigate composed image retrieval with text feedback. Users gradually look for the target of interest by moving from coarse- to fine-grained feedback. However, existing methods merely focus on the latter, i.e., fine-grained search, by harnessing positive and negative pairs during training. This pair-based paradigm only considers the one-to-one distance between a pair of specific points, which is not aligned with the one-to-many coarse-grained retrieval process and compromises the recall rate. In an attempt to fill this gap, we introduce a unified learning approach to simultaneously model coarse- and fine-grained retrieval by considering multi-grained uncertainty. The key idea underpinning the proposed method is to cast fine- and coarse-grained retrieval as matching data points with small and large fluctuations, respectively. Specifically, our method contains two modules: uncertainty modeling and uncertainty regularization. (1) The uncertainty modeling simulates multi-grained queries by introducing identically distributed fluctuations in the feature space. (2) Based on the uncertainty modeling, we further introduce uncertainty regularization to adapt the matching objective according to the fluctuation range. Compared with existing methods, the proposed strategy explicitly prevents the model from pushing away potential candidates in the early stage, and thus improves the recall rate. On three public datasets, i.e., FashionIQ, Fashion200k, and Shoes, the proposed method achieves +4.03%, +3.38%, and +2.40% Recall@50 accuracy over a strong baseline, respectively.
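One way to read the two modules is through an aleatoric-uncertainty-style objective, sketched below: the query feature is jittered with identically distributed Gaussian noise to mimic a coarse-grained query, and the matching penalty is scaled down where the learned fluctuation range is large. This specific formulation is an assumption for illustration, not the paper's exact loss.

```python
import torch

def uncertain_match_loss(query_feat, target_feat, log_sigma):
    """Jitter the query feature to simulate a coarse-grained query and
    down-weight the matching loss where the fluctuation is large
    (aleatoric-style sketch, not the paper's formulation)."""
    sigma = log_sigma.exp()
    noisy_query = query_feat + sigma * torch.randn_like(query_feat)   # multi-grained query
    per_dim = (noisy_query - target_feat) ** 2
    # Large sigma shrinks the distance penalty; the log-sigma term keeps
    # the model from inflating sigma for free.
    return (per_dim / (2 * sigma ** 2) + log_sigma).mean()
```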
Modern supervised learning neural network models require a large amount of manually labeled data, which makes the construction of domain-specific knowledge graphs time-consuming and labor-intensive. In parallel, although there has been much research on named entity recognition and relation extraction based on distantly supervised learning, constructing a domain-specific knowledge graph from large collections of textual data without manual annotation remains an urgent problem to be solved. In response, we propose an integrated framework for adapting and re-learning knowledge graphs from a coarse domain (biomedical) to a finer-grained domain (oncology). In this framework, we apply distant supervision to cross-domain knowledge graph adaptation. Consequently, no manual data annotation is required to train the model. We introduce a novel iterative training strategy to facilitate the discovery of domain-specific named entities and triples. Experimental results indicate that the proposed framework can perform domain adaptation and knowledge graph construction efficiently.
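The overall loop can be pictured with the toy sketch below: sentences are weakly labeled by matching entities from the coarse-domain knowledge base, a tagger is trained on those labels, newly extracted entities are folded back into the knowledge base, and the process repeats. `train_ner` and `extract_entities` are hypothetical callbacks standing in for the framework's actual components.

```python
def distant_supervision_labels(sentences, kb_entities):
    """Weakly label sentences by string-matching entities from a coarse
    knowledge base (minimal distant-supervision sketch; real systems use
    smarter matching and noise filtering)."""
    labels = []
    for sent in sentences:
        spans = [(sent.find(e), sent.find(e) + len(e), t)
                 for e, t in kb_entities.items() if e in sent]
        labels.append(spans)
    return labels

def iterative_adaptation(sentences, kb_entities, train_ner, extract_entities, rounds=3):
    """Sketch of the iterative strategy: weakly label, train, extract new
    domain-specific entities, grow the KB, and repeat."""
    model = None
    for _ in range(rounds):
        weak_labels = distant_supervision_labels(sentences, kb_entities)
        model = train_ner(sentences, weak_labels)          # user-supplied trainer
        for entity, etype in extract_entities(model, sentences):
            kb_entities.setdefault(entity, etype)          # add newly found entities
    return model, kb_entities
```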
Leveraging multi-modal fusion, especially between cameras and LiDAR, has become essential for building accurate and robust 3D object detection systems for autonomous vehicles. Until recently, point-decorating approaches, in which points in the LiDAR point cloud are augmented with camera features, have been the dominant approach in the field. However, these approaches fail to exploit the higher-resolution images from cameras. Recent works projecting camera features into a bird's-eye-view (BEV) fusion space have also been proposed, yet they require projecting millions of pixels, most of which contain only background information. In this work, we propose a novel approach, Center Feature Fusion (CFF), in which we leverage center-based detection networks in both the camera and LiDAR streams to identify relevant object locations. We then use the center-based detections to identify the locations of pixel features relevant to objects, which are a small fraction of the total number in the image. These are then projected and fused in the BEV frame. On the nuScenes dataset, we outperform the LiDAR-only baseline by 4.9% mAP while fusing up to 100x fewer features than other fusion methods.
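The sparsity argument can be made concrete with the sketch below: only the top-k camera pixels with the highest center scores are kept, and just their features are scattered into the LiDAR BEV grid through a precomputed pixel-to-BEV index. The index tensor, k, and the additive fusion are illustrative assumptions; the actual CFF projection relies on depth and calibration.

```python
import torch

def fuse_center_features(cam_heatmap, cam_feats, pix_to_bev, lidar_bev, k=500):
    """Scatter only the top-k camera center features into the LiDAR BEV map.
    `pix_to_bev` is an assumed precomputed flat BEV index per camera pixel."""
    B, C, H, W = cam_feats.shape
    bev_flat = lidar_bev.flatten(2).clone()                 # (B, C, Hb*Wb)
    scores = cam_heatmap.flatten(1)                         # (B, H*W) center scores
    topk = scores.topk(k, dim=1).indices                    # top-k candidate pixels
    for b in range(B):
        feats = cam_feats[b].flatten(1)[:, topk[b]]         # (C, k) selected features
        bev_flat[b].index_add_(1, pix_to_bev[b, topk[b]], feats)
    return bev_flat.view(lidar_bev.shape)
```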
Building on the great success of pre-trained language models (PrLMs) in source code understanding tasks, the current literature either further improves the performance (generalization) of PrLMs or their robustness against adversarial attacks. However, these works have to compromise on a trade-off between the two aspects, and none of them considers improving both sides in an effective and practical way. To fill this gap, we propose Semantic-Preserving Adversarial Code Embeddings (SPACE) to find worst-case semantic-preserving attacks while forcing the model to predict the correct labels under these worst cases. Experiments and analysis demonstrate that SPACE remains robust against state-of-the-art attacks while boosting the performance of PrLMs of code.
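A rough sketch of embedding-space adversarial training with a semantics-preserving constraint is shown below: perturbations are restricted to identifier positions (so the code's semantics cannot change) and found by a few gradient-ascent steps, and the model is then trained to predict the correct label under that worst case. `model_head`, the identifier mask, and the step sizes are assumptions for illustration, not SPACE's exact procedure.

```python
import torch

def adversarial_embedding_step(embeds, ident_mask, model_head, labels, eps=0.01, steps=3):
    """Find a worst-case perturbation on identifier-token embeddings only
    (ident_mask is a 0/1 tensor over token positions), then return the
    training loss on the perturbed input (illustrative min-max sketch)."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model_head(embeds + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += eps * grad.sign() * ident_mask.unsqueeze(-1)  # only identifiers move
            delta.clamp_(-eps * steps, eps * steps)
    # Train on the worst case so the model stays correct under the attack.
    return torch.nn.functional.cross_entropy(model_head(embeds + delta.detach()), labels)
```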
AI Illustrator aims to automatically design visually appealing images that evoke rich thoughts and emotions. To achieve this goal, we propose a framework that translates raw descriptions with complex semantics into semantically corresponding images. The main challenge lies in the complexity of the semantics of raw descriptions, which may be hard to visualize and typically poses difficulties for existing methods. To address this issue, we propose a Prompt-based Cross-Modal Generation Framework (PCM-Frame) that leverages two powerful pre-trained models, CLIP and StyleGAN. Our framework consists of two components: a prompt-based projection module from text embeddings to image embeddings, and an adapted image generation module that takes image embeddings as inputs and is trained with a combined semantic consistency loss. To bridge the gap between realistic images and illustration designs, we further adopt a stylization model as post-processing for better visual effects. Benefiting from the pre-trained models, our method can handle complex descriptions and does not require external paired data for training. Furthermore, we have built a benchmark consisting of 200 raw descriptions. We conduct a user study to demonstrate our advantage over competing methods on complex texts. We release the code at https://github.com/researchmm/AI_Illustrator.
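The projection component could look roughly like the sketch below: an MLP maps a CLIP text embedding into the image-embedding space, and a cosine-similarity term ties the generated image (re-encoded by CLIP) back to the projected embedding as a semantic consistency loss. Dimensions, depth, and the exact loss are assumptions, not the released code.

```python
import torch
import torch.nn as nn

class TextToImageEmbeddingProjector(nn.Module):
    """Illustrative projector from CLIP text embeddings to an image-embedding
    space used to drive a StyleGAN-based generator."""
    def __init__(self, text_dim=512, image_dim=512, hidden=1024):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(text_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, image_dim))

    def forward(self, text_emb):
        return self.mlp(text_emb)

def semantic_consistency_loss(pred_image_emb, clip_emb_of_generated_image):
    # Encourage the generated image, re-encoded by CLIP, to agree with the
    # projected embedding that produced it (illustrative consistency term).
    return 1 - nn.functional.cosine_similarity(
        pred_image_emb, clip_emb_of_generated_image, dim=-1).mean()
```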
Existing video frame interpolation methods can only interpolate a frame at a given intermediate time step, e.g., 1/2. In this paper, we aim to explore a more general form of video frame interpolation that synthesizes frames at arbitrary time steps. To this end, we consider processing different time steps in a unified way with the help of meta-learning. Specifically, we develop a dual meta-learned frame interpolation framework that synthesizes intermediate frames under the guidance of context information and optical flow, with the time step taken as side information. First, a content-aware meta-learned flow refinement module is built to improve the accuracy of optical flow estimation based on down-sampled versions of the input frames. Second, taking the refined optical flow and the time step as inputs, a motion-aware meta-learned frame interpolation module generates a convolution kernel for every pixel, which is applied to the coarsely warped feature maps of the inputs to synthesize the predicted frame. Extensive qualitative and quantitative evaluations, as well as ablation studies, demonstrate that by introducing meta-learning into our framework in such a well-designed manner, our method not only achieves superior performance over state-of-the-art frame interpolation approaches but also possesses the extended capacity to support interpolation at arbitrary time steps.
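The per-pixel kernel idea can be sketched as below: a small conv head, conditioned on the refined flow and a broadcast time-step channel, predicts a K×K kernel for every pixel, which is then applied to the coarsely warped feature map via unfold. Channel counts, the kernel size, and the softmax normalization are illustrative assumptions rather than the paper's meta-learned design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelKernelSynthesis(nn.Module):
    """Predict a k*k kernel per pixel from (warped features, flow, time step)
    and apply it to the warped feature map (illustrative sketch)."""
    def __init__(self, feat_ch=64, k=3):
        super().__init__()
        self.k = k
        self.kernel_net = nn.Conv2d(feat_ch + 2 + 1, k * k, 3, padding=1)

    def forward(self, warped_feat, flow, t):
        B, C, H, W = warped_feat.shape
        t_map = torch.full((B, 1, H, W), float(t), device=warped_feat.device)
        kernels = torch.softmax(self.kernel_net(
            torch.cat([warped_feat, flow, t_map], dim=1)), dim=1)     # (B, k*k, H, W)
        patches = F.unfold(warped_feat, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
        patches = patches.view(B, C, self.k * self.k, H, W)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)            # (B, C, H, W)
```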
3D scene flow estimation from point clouds is a low-level 3D motion perception task in computer vision. Flow embedding is a commonly used technique in scene flow estimation, which encodes the point motion between two consecutive frames. Thus, it is critical for the flow embedding to capture the correct overall direction of the motion. However, previous works only search locally to determine soft correspondences, ignoring distant points that may be the actual matches. In addition, the estimated correspondences are usually computed in the forward direction between adjacent point clouds and may be inconsistent with those obtained in the backward direction. To address these issues, we propose a novel all-to-all flow embedding layer with backward reliability validation during the initial scene flow estimation. Moreover, we investigate and compare several design choices in key components of the 3D scene flow network, including the point similarity computation, the input elements of the predictor, and the design of the predictor and refinement levels. After carefully choosing the most effective designs, we are able to present a model that achieves state-of-the-art performance on the FlyingThings3D and KITTI scene flow datasets. Our proposed model surpasses all existing methods by at least 38.2% on the FlyingThings3D dataset and by 24.7% on the KITTI scene flow dataset in terms of the EPE3D metric. We release the code at https://github.com/IRMVLab/3DFlow.
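The forward-backward idea behind the reliability check can be illustrated with the simple sketch below, which works directly on point coordinates rather than learned features: the full all-to-all cost matrix gives forward and backward nearest matches, and a point is marked reliable only when its forward match points back to it.

```python
import torch

def all_to_all_flow_with_backward_check(p1, p2):
    """Coarse scene flow from all-to-all matching with forward-backward
    consistency (illustrative; the paper's layer uses learned features)."""
    dist = torch.cdist(p1, p2)                        # (N1, N2) all-to-all costs
    fwd = dist.argmin(dim=1)                          # best match in frame 2 for each p1
    bwd = dist.argmin(dim=0)                          # best match in frame 1 for each p2
    reliable = bwd[fwd] == torch.arange(p1.size(0), device=p1.device)
    flow = p2[fwd] - p1                               # coarse per-point scene flow
    return flow, reliable
```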